A unified view on differential privacy and robustness to adversarial examples
Pinot, Rafael, Yger, Florian, Gouy-Pailler, Cédric, Atif, Jamal
This short note highlights some links between two lines of research within the emerging topic of trustworthy machine learning: differential privacy and robustness to adversarial examples. By abstracting the definitions of both notions, we show that they build upon the same theoretical ground, and hence that results obtained so far in one domain can be transferred to the other. More precisely, our analysis is based on two key elements: probabilistic mappings (also called randomized algorithms in the differential privacy community), and the Rényi divergence, which subsumes a large family of divergences. We first generalize the definition of robustness against adversarial examples to encompass probabilistic mappings. Then we observe that Rényi differential privacy (a generalization of differential privacy recently proposed in~\cite{Mironov2017RenyiDP}) and our definition of robustness share several similarities. Finally, we discuss how both communities can benefit from this connection by transferring technical tools from one research field to the other.
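For reference, the two notions the abstract connects can be stated with standard definitions (these follow the usual formulation in the literature, e.g. Mironov's Rényi differential privacy paper cited above, rather than being quoted from this note):

```latex
% Rényi divergence of order \alpha > 1 between distributions P and Q:
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \, \mathbb{E}_{x \sim Q}\!\left[\left(\frac{P(x)}{Q(x)}\right)^{\alpha}\right].

% A randomized mechanism M satisfies (\alpha, \varepsilon)-Rényi differential
% privacy if, for every pair of adjacent datasets D and D',
D_\alpha\!\left(M(D) \,\|\, M(D')\right) \;\le\; \varepsilon .
```

Replacing adjacent datasets with inputs at small perturbation distance is, roughly, the bridge to adversarial robustness that the note develops.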
Theoretical evidence for adversarial robustness through randomization: the case of the Exponential family
Pinot, Rafael, Meunier, Laurent, Araujo, Alexandre, Kashima, Hisashi, Yger, Florian, Gouy-Pailler, Cédric, Atif, Jamal
This paper investigates the theory of robustness against adversarial attacks. It focuses on the family of randomization techniques that consist of injecting noise into the network at inference time. These techniques have proven effective in many contexts, but so far lack theoretical justification. We close this gap by presenting a theoretical analysis of these approaches, hence explaining why they perform well in practice. More precisely, we provide the first result relating the randomization rate to robustness against adversarial attacks. This result applies to the general family of exponential distributions, and thus extends and unifies the previous approaches. We support our theoretical claims with a set of experiments.
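The inference-time noise injection described above can be sketched in a few lines. This is an illustrative toy, not the paper's actual construction: `randomized_predict`, `sigma`, `n_samples`, and the toy classifier are all made-up names, and Gaussian noise is used here simply as one member of the Exponential family the abstract mentions.

```python
import numpy as np

def randomized_predict(classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Predict by majority vote over noisy copies of the input.

    Noise is drawn fresh at inference time, so the overall predictor is a
    probabilistic mapping even when `classifier` itself is deterministic.
    """
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n_samples):
        noisy_x = x + rng.normal(0.0, sigma, size=np.shape(x))
        label = classifier(noisy_x)
        votes[label] = votes.get(label, 0) + 1
    # Return the most frequently predicted label.
    return max(votes, key=votes.get)

# Toy deterministic classifier: sign of the coordinate sum.
def toy_classifier(x):
    return int(np.sum(x) > 0)
```

Larger `sigma` makes the prediction more stable under small input perturbations but degrades accuracy; relating that trade-off to the noise distribution is the kind of question the paper's analysis addresses.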
The Emerging World of Neural Net Driven MT
Originally posted here, where you can see all the graphics. There has been much in the news lately about the next wave of MT technology, driven by a technology called deep learning and neural networks (DNNs). I will attempt to provide a brief layman's overview of what this is, even though I am barely qualified to do so (but if Trump can run for POTUS, then surely my trying to do this is less of a stretch). Please feel free to correct me if I have inadvertently made errors here.

To understand deep learning and neural networks, it is useful to first understand what "machine learning" is. Very succinctly stated, machine learning is the "field of study that gives computers the ability to learn without being explicitly programmed", according to Arthur Samuel. Machine learning is a sub-field of computer science that evolved from the study of pattern recognition and computational learning theory in artificial intelligence.